Sequential Minimal Optimization (SMO) is the traditional training algorithm for Support Vector Machines (SVMs). However, SMO does not scale well with the size of the training set. For that reason, Stochastic Gradient Descent (SGD) algorithms, which have better scalability, are a better option for massive data mining applications. Furthermore, even with the use of SGD, training times can become extremely long depending on the data set. For this reason, accelerators such as Field-Programmable Gate Arrays (FPGAs) are used. This work describes a hardware implementation, using an FPGA, of a fully parallel SVM trained with Stochastic Gradient Descent. The proposed FPGA implementation of an SVM with SGD achieves speedups of more than 10,000× relative to software implementations running on a quad-core processor and up to 319× compared to state-of-the-art FPGA implementations, while requiring fewer hardware resources. The results show that the proposed architecture is a viable solution for highly demanding problems such as those present in big data analysis.
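The training scheme the abstract refers to, a linear SVM optimized with SGD on the hinge loss, can be illustrated in software. The sketch below is not the paper's hardware design; it is a minimal Pegasos-style reference implementation, with the regularization strength `lam`, the epoch count, and the step-size schedule all chosen as illustrative assumptions:

```python
import numpy as np

def svm_sgd(X, y, lam=0.01, epochs=20, seed=0):
    """Train a linear SVM on (X, y), y in {-1, +1}, by SGD on the
    regularized hinge loss: lam/2 * ||w||^2 + max(0, 1 - y * w.x).
    Uses the Pegasos-style step size eta_t = 1 / (lam * t)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * X[i].dot(w)
            if margin < 1:
                # Subgradient step: shrink w, then add the misclassified
                # (or margin-violating) sample's contribution.
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                # Only the regularizer contributes to the subgradient.
                w = (1 - eta * lam) * w
    return w

# Tiny linearly separable toy problem (illustrative data, not from the paper).
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w = svm_sgd(X, y)
preds = np.sign(X.dot(w))
```

Each per-sample update involves only a dot product, a comparison, and a scaled vector addition, which is what makes the algorithm amenable to the fully parallel FPGA datapath the abstract describes.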